Results 1 - 12 of 12
1.
Lecture Notes in Electrical Engineering ; 954:421-430, 2023.
Article in English | Scopus | ID: covidwho-20233444

ABSTRACT

This paper proposes a novel and robust technique for remote cough recognition for COVID-19 detection, based on sound and image analysis. The objective is to create a real-time system combining artificial intelligence (AI) algorithms, embedded systems, and a network of sensors to detect COVID-19-specific cough and identify the person who coughed. Remote acquisition and analysis of sounds and images allow the system both to detect and classify the cough using AI algorithms and to identify the coughing person using image processing. This makes it possible to distinguish between a healthy person and a person carrying the COVID-19 virus. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

2.
Comput Biol Med ; 163: 107153, 2023 Jun 08.
Article in English | MEDLINE | ID: covidwho-20233898

ABSTRACT

This study proposes a new deep learning-based method that demonstrates high performance in detecting COVID-19 from cough, breath, and voice signals. The method, named CovidCoughNet, consists of a deep feature extraction network (InceptionFireNet) and a prediction network (DeepConvNet). The InceptionFireNet architecture, based on Inception and Fire modules, was designed to extract important feature maps. The DeepConvNet architecture, built from convolutional neural network blocks, was developed to make predictions from the feature vectors produced by InceptionFireNet. The COUGHVID dataset (cough data) and the Coswara dataset (cough, breath, and voice signals) were used. Pitch shifting was used to augment the signal data, which contributed significantly to improving performance. In addition, chroma features (CF), root mean square energy (RMSE), spectral centroid (SC), spectral bandwidth (SB), spectral rolloff (SR), zero crossing rate (ZCR), and Mel frequency cepstral coefficient (MFCC) feature extraction techniques were used to extract important features from the voice signals. Experiments showed that pitch shifting improved performance by around 3% over raw signals. With the COUGHVID dataset (Healthy, COVID-19, and Symptomatic classes), the proposed model achieved 99.19% accuracy, 0.99 precision, 0.98 recall, 0.98 F1-score, 97.77% specificity, and 98.44% AUC. Similarly, with the voice data in the Coswara dataset, higher performance was achieved than in the cough and breath experiments: 99.63% accuracy, 100% precision, 0.99 recall, 0.99 F1-score, 99.24% specificity, and 99.24% AUC. Compared with current studies in the literature, the proposed model exhibits highly successful performance.
The codes and details of the experimental studies can be accessed from the relevant Github page: (https://github.com/GaffariCelik/CovidCoughNet).
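Several of the hand-crafted features this abstract lists (zero crossing rate, RMS energy, spectral centroid) have simple closed forms. As an illustration only, not the authors' code, a minimal numpy sketch:

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    return np.mean(np.abs(np.diff(np.signbit(frame).astype(int))))

def rms_energy(frame):
    """Root mean square energy of one frame."""
    return np.sqrt(np.mean(frame ** 2))

def spectral_centroid(frame, sr):
    """Magnitude-weighted mean frequency of the frame's spectrum."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return np.sum(freqs * mag) / (np.sum(mag) + 1e-12)

# A pure 440 Hz tone as a stand-in for an audio frame.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
print(round(spectral_centroid(tone, sr)))  # 440
```

For a pure tone the centroid recovers the tone's frequency, and RMS energy is amplitude divided by the square root of two, which makes these functions easy to sanity-check.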

3.
Signals and Communication Technology ; : 185-205, 2023.
Article in English | Scopus | ID: covidwho-2270383

ABSTRACT

COVID-19 has been a major issue for many countries; it has already affected millions of people across the world and caused nearly 4 million deaths. Precautionary measures must be taken to bring cases under control, and the easiest way of diagnosing the disease should also be identified. Treating COVID-19 infection requires accurate analysis of CT images, a complex process that demands close attention from a specialist. It has also been shown that COVID-19 infection can be identified from a patient's breathing sounds. A new framework is proposed for diagnosing COVID-19 using CT images and breathing sounds. The network predicts one of four classes (normal, COVID-19, bacterial pneumonia, or viral pneumonia) using a multiclass MLP classification network. The proposed framework has two modules: (i) a respiratory sound analysis framework and (ii) a CT image analysis framework. These modules cover the workflow for data gathering, data preprocessing, and development of the deep learning model (deep CNN + MLP). In the respiratory sound analysis framework, the gathered audio signals are converted to a spectrogram video using an FFT analyzer. Features such as MFCCs, ZCR, log energies, and kurtosis are extracted to identify dry/wet coughs, capture variability in the signal and the prevalence of higher amplitudes, and improve audio classification performance. These features are extracted with a deep CNN architecture consisting of a series of convolution, pooling, and ReLU (rectified linear unit) layers, and classification is performed with a multilayer perceptron (MLP). In parallel, diagnosis is improved by analyzing the CT images. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.
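The FFT-based spectrogram step described above amounts to a short-time Fourier transform over windowed, overlapping frames. A hedged numpy illustration (the frame and hop sizes here are arbitrary choices, not values from the chapter):

```python
import numpy as np

def stft_spectrogram(signal, frame_len=512, hop=256):
    """Power spectrogram: |FFT|^2 of Hann-windowed, overlapping frames."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len] * window
                       for i in range(n_frames)])
    # One row per time frame, one column per frequency bin.
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

# A rising chirp: its spectrogram peak should move to higher bins over time.
sr = 8000
t = np.arange(2 * sr) / sr
chirp = np.sin(2 * np.pi * (200 + 300 * t) * t)
spec = stft_spectrogram(chirp)
print(spec.shape)  # (61, 257)
```

Such a time-frequency image is the usual input representation when a deep CNN with convolution, pooling, and ReLU layers is applied to audio.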

4.
Pulmonologiya ; 32(6):834-841, 2022.
Article in Russian | EMBASE | ID: covidwho-2253226

ABSTRACT

Cough is a frequent manifestation of COVID-19 (COronaVIrus Disease 2019) and therefore has important diagnostic value, yet there is little information in the literature about the characteristics of cough in COVID-19 patients. The objective was to perform a spectral analysis of cough sounds in COVID-19 patients in comparison with the induced cough of healthy individuals. Methods. The main group consisted of 218 COVID-19 patients (48.56% men, 51.44% women; average age 40.2 (32.4; 50.1) years). The comparison group consisted of 60 healthy individuals (50.0% men, 50.0% women; average age 41.7 (31.2; 53.0) years) whose cough was induced. A cough sound was recorded from each subject and digitally processed using a fast Fourier transform algorithm. The following time-frequency parameters of cough sounds were evaluated: duration (ms), the ratio of the energy of low and medium frequencies (60 - 600 Hz) to the energy of high frequencies (600 - 6,000 Hz), and the frequency of maximum sound energy (Hz). These parameters were determined both for the entire cough and for individual phases of the cough sound. Results. Significant differences were found between several cough parameters of the two groups. The total duration of the cough act was significantly shorter in patients with COVID-19 than in the induced cough of healthy individuals (T = 342.5 (277.0; 394.0) in the main group; T(c) = 400.5 (359.0; 457.0) in the comparison group; p = 0.0000). In addition, the cough sounds of COVID-19 patients were dominated by higher-frequency energy compared with the healthy controls (Q = 0.3095 (0.223; 0.454) in the main group; Q(c) = 0.4535 (0.3725; 0.619) in the comparison group; p = 0.0000). The maximum frequency of cough sound energy in the main group was significantly higher than in the comparison group (Fmax = 463.0 (274.0; 761.0) in the main group; Fmax = 347 (253.0; 488.0) in the comparison group; p = 0.0013). There were no differences between the frequencies of maximum cough sound energy of the individual phases of the cough act or the duration of the first phase. Conclusion. The cough of patients with COVID-19 is characterized by a shorter duration and a predominance of high-frequency energy compared with the induced cough of healthy individuals. Copyright © 2022 Budnevsky A.V. et al.
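The two spectral indices in this study, the low/mid-to-high energy ratio Q and the peak-energy frequency Fmax, reduce to band-limited power sums over an FFT. A minimal sketch with the band edges taken from the abstract (windowing and segmentation details are assumptions, not the authors' pipeline):

```python
import numpy as np

def cough_spectral_indices(x, sr):
    """Q: energy in 60-600 Hz over energy in 600-6000 Hz.
    Fmax: frequency of maximum spectral energy within 60-6000 Hz."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    low = power[(freqs >= 60) & (freqs < 600)].sum()
    high = power[(freqs >= 600) & (freqs <= 6000)].sum()
    q = low / (high + 1e-12)
    band = (freqs >= 60) & (freqs <= 6000)
    fmax = freqs[band][power[band].argmax()]
    return q, fmax

# Synthetic signal: a strong 300 Hz component plus a weaker 900 Hz one.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 900 * t)
q, fmax = cough_spectral_indices(x, sr)
print(round(q, 1), fmax)  # 4.0 300.0
```

A lower Q, as reported for the COVID-19 group, means relatively more energy above 600 Hz.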

5.
The Lancet Infectious Diseases ; 23(1):43, 2023.
Article in English | Scopus | ID: covidwho-2243761
6.
IEEE Transactions on Artificial Intelligence ; : 1-20, 2022.
Article in English | Scopus | ID: covidwho-2192072

ABSTRACT

Coronavirus disease (COVID-19) is a global pandemic that has affected the whole world drastically, creating a calamitous situation in which millions of people have lost their lives. Scientists are still far from knowing how to tackle the coronavirus because of the multiple mutations found around the globe. The standard testing technique for clinical diagnosis of COVID-19, polymerase chain reaction (PCR), is expensive and time-consuming. To assist specialists and radiologists in COVID-19 detection and diagnosis, however, deep learning plays an important role. Many research efforts leverage deep learning techniques for the identification or categorization of COVID-19-positive patients, and these techniques have proved to be powerful tools for automatically detecting or diagnosing COVID-19 cases. In this paper, we identify significant challenges for deep learning-based systems that use different medical data modalities, including cough and breath sounds, chest X-ray, and computed tomography (CT), to combat the COVID-19 outbreak. We also pinpoint important research questions for each category of challenges. © IEEE.

7.
27th IEEE Symposium on Computers and Communications, ISCC 2022 ; 2022-June, 2022.
Article in English | Scopus | ID: covidwho-2120546

ABSTRACT

Detection of COVID-19 has been a global challenge due to the lack of proper resources across all regions. Recently, research has been conducted on non-invasive testing for COVID-19 using an individual's cough audio as input to deep learning models. However, these methods do not pay sufficient attention to the resource and infrastructure constraints of real-life practical deployment, and their lack of focus on maintaining user data privacy makes them unsuitable for large-scale use. We propose a resource-efficient CoviFL framework using an AIoMT approach for remote COVID-19 detection while maintaining user data privacy. Federated learning is used to decentralize CoviFL CNN model training and to test the COVID-19 status of users with an accuracy of 93.01% on portable AIoMT edge devices. Experiments on real-world datasets suggest that the proposed CoviFL solution is promising for large-scale deployment even in resource- and infrastructure-constrained environments, making it suitable for remote COVID-19 detection. © 2022 IEEE.
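Decentralized training of the kind described here typically aggregates locally trained client models by a data-weighted average of their parameters (federated averaging). A toy numpy sketch of that aggregation step; CoviFL's exact aggregation rule is not specified in this abstract, so this is an assumption:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter vectors by local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)  # (n_clients, n_params)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Three edge devices with locally trained parameters and unequal data volumes.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]
global_w = fedavg(clients, sizes)
print(global_w)  # [3.5 4.5]
```

Only parameter vectors leave each device, never raw cough audio, which is what preserves user data privacy in this setting.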

8.
30th Signal Processing and Communications Applications Conference, SIU 2022 ; 2022.
Article in Turkish | Scopus | ID: covidwho-2052077

ABSTRACT

The COVID-19 virus has dragged the world into an epidemic that has infected more than 413 million people and caused the death of nearly 6 million. Although biomedical tests diagnose COVID-19 with high accuracy, they require physical contact and therefore increase the risk of infection. Machine learning models have been proposed as an alternative to biomedical testing. Cough has been identified by the World Health Organization as one of the symptoms of COVID-19. In this study, the performance of machine-learning-based positive-case detection was examined using cough recordings from the COUGHVID dataset. To increase model performance, MFCC, Δ-MFCC, and Mel coefficient features were extracted after preprocessing the recordings. In an ensemble learning model using these features as independent variables, an AUC-ROC of 0.65 was reached. In addition, since the acoustic properties of male and female cough sounds differ, models were trained separately by gender, yielding AUC-ROC values of 0.70 for females and 0.68 for males. Trimming the silent regions at the beginning and end of the recordings, using an ensemble learning model, and grouping by gender gave better results in this study than in previous work. © 2022 IEEE.
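The Δ-MFCC features mentioned above are temporal derivatives of the MFCC matrix. A minimal numpy sketch of the simple first-difference form (the study's exact delta window is not stated, so this is illustrative):

```python
import numpy as np

def delta(features):
    """First-order temporal difference of a (n_frames, n_coeffs) feature
    matrix, padded so the output shape matches the input."""
    return np.diff(features, axis=0, prepend=features[:1])

# Toy MFCC matrix: 3 frames, 2 coefficients.
mfcc = np.array([[1.0, 2.0],
                 [2.0, 4.0],
                 [4.0, 8.0]])
print(delta(mfcc))  # [[0. 0.] [1. 2.] [2. 4.]]
```

Production toolkits usually compute deltas as regression slopes over a small window of frames rather than raw differences, which smooths out frame-to-frame noise.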

9.
IEEE International Instrumentation and Measurement Technology Conference (I2MTC) ; 2021.
Article in English | Web of Science | ID: covidwho-1978391

ABSTRACT

Sleep problems are currently the norm for many people, especially during the COVID-19 pandemic. Due to the limited number of sleep medicine studies, most people are unaware of, and simply ignore, their sleep problems. Polysomnography (PSG) is widely used in sleep medicine, but because the equipment must be attached to the subject's body it disturbs the subject, which may decrease sleep quality and affect result accuracy. This work proposes a smart alarm based on the sleep cycle, using sound analysis with a non-contact device: the microphone of a Google AIY Voice Kit with a Raspberry Pi. The microphone records the subject's sleep sounds and detects the subject's sleep cycle. The system triggers a speaker attached to the Voice Kit to produce a sound that wakes the subject once a particular sleep cycle completes, according to their preference. Results showed that the system could detect sounds while subjects were sleeping and reveal a subject's sleep pattern. After a certain number of minutes, the sound amplitude increased by 3 dB, indicating that the subject was likely in a REM stage and would complete a sleep cycle after a further 10 minutes.

10.
2nd International Conference on Advanced Research in Computing, ICARC 2022 ; : 242-247, 2022.
Article in English | Scopus | ID: covidwho-1831775

ABSTRACT

Diagnosing and treating lung diseases can be challenging, since the signs and symptoms of a wide range of medical conditions can indicate interstitial lung disease. Respiratory diseases impose an immense worldwide health burden, all the more deadly considering COVID-19 in present times. Auscultation is the most common and primary method of respiratory disease diagnosis: it is inexpensive, non-invasive, safe, and quick. However, diagnostic accuracy with auscultation depends on the experience and knowledge of the physician and requires extensive training. This study proposes a solution for respiratory disease diagnosis. 'Smart Stethoscope' is an intelligent platform, powered by state-of-the-art artificial intelligence, that assists in respiratory disease diagnosis and in training novice physicians. The system performs three main functions (modes), which are a unique aspect of this study. The real-time prediction mode provides real-time diagnostic predictions for lung sounds collected via auscultation. The offline training mode is for trainee doctors and medical students. Finally, the expert mode continuously improves the system's prediction performance through validations and evaluations from pulmonologists. The prediction model combines a state-of-the-art neural network with an ensembling convolutional recurrent neural network. The proposed convolutional bidirectional long short-term memory (C-BiLSTM) model achieved 98% accuracy on 6-class classification of breathing cycles from the ICBHI 2017 scientific challenge respiratory sound database. The novelty of the project lies in the whole platform, which provides different functionalities for a diverse hierarchy of medical professionals, supported by a state-of-the-art deep learning prediction model. © 2022 IEEE.

11.
Front Psychol ; 12: 686738, 2021.
Article in English | MEDLINE | ID: covidwho-1337673
12.
Chaos Solitons Fractals ; 140: 110246, 2020 Nov.
Article in English | MEDLINE | ID: covidwho-950091

ABSTRACT

The development of novel digital auscultation techniques has become highly significant in the context of the COVID-19 pandemic. The present work reports spectral, nonlinear time series, fractal, and complexity analyses of vesicular (VB) and bronchial (BB) breath signals, carried out on 37 breath sound recordings. The spectral analysis brings out the signatures of VB and BB through the power spectral density plot and wavelet scalogram. The dynamics of airflow through the respiratory tract during VB and BB are investigated using nonlinear time series and complexity analyses in terms of the phase portrait, fractal dimension, Hurst exponent, and sample entropy. The higher degree of chaoticity in BB relative to VB is revealed by the maximal Lyapunov exponent. Principal component analysis classifies VB and BB sound signals through features extracted from the power spectral density data. The proposed method is simple, cost-effective, and sensitive, with far-reaching potential for the diagnosis of COVID-19 through lung auscultation.
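Sample entropy, one of the complexity measures used above, counts template matches of length m and m+1 within a tolerance and takes the negative log of their ratio; more regular signals score lower. A compact numpy sketch under standard parameter choices (m = 2, r = 0.2·std), which are common defaults rather than this paper's settings:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn: -ln(A/B), where B and A count template matches of length m
    and m+1 (Chebyshev distance <= r * std, self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(length):
        templates = np.lib.stride_tricks.sliding_window_view(x, length)
        dists = np.max(np.abs(templates[:, None] - templates[None, :]), axis=-1)
        n = len(templates)
        return ((dists <= tol).sum() - n) / 2  # drop diagonal self-matches

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))  # highly predictable
noisy = rng.standard_normal(500)                   # irregular
print(sample_entropy(regular) < sample_entropy(noisy))  # True
```

The same ordering is what distinguishes the more chaotic bronchial signals from vesicular ones in the paper's complexity analysis.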
